Events

Public defence in Computer Science, M.Sc. (Tech) Teemu Lehtinen

New automated questions examine how programming students understand the computer programs they have created.

Public defence from the Aalto University School of Science, Department of Computer Science.
[Image: program code, a question about the role of a variable, and a flowchart to produce the answer.]

Title of the thesis: Questions About Learners' Code: Extending Automated Assessment Towards Program Comprehension

Doctoral student: Teemu Lehtinen
Opponent: Professor Mikko-Jussi Laakso, University of Turku
Custos: Professor Lauri Malmi, Aalto University School of Science, Department of Computer Science

Programming is not only about writing computer programs. The various steps of designing, constructing, evaluating, and maintaining programs also require other skills, such as reading, understanding, and discussing program code. While many introductory programming courses aim to develop all of these skills, a typical programming exercise asks the student to write a program for a given specification. Many courses have hundreds of students per teacher and depend on automated systems to give feedback on the students' programs. Current systems tend to generate feedback that helps students iterate toward acceptable code rather than acquire a deep understanding of, and the ability to discuss, the code they produce.

This dissertation develops automated, personalized questions about learners' code (QLCs) that target the structure and logic of the potentially unique programs that students create. According to our research, as many as 20% of the students who create a correct program may answer incorrectly about concepts that are critical to reasoning about how their program works. More than half of the novice students fail to mentally trace the execution of their program, which replicates previous results where example code was used instead of student-produced code. Our results support previous findings indicating that teachers may underestimate the time students require to reach adequate cognitive capacity to reason about program code. Students who test abrupt changes to their program code and appear to pass exercise criteria by chance often answer QLCs incorrectly and have less success in the course. QLCs could therefore provide early warnings about students who need additional support to learn programming.
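As a hypothetical illustration only (the names and logic below are assumptions for this sketch, not the dissertation's actual implementation), a QLC generator might parse a student's submitted program and automatically ask about the role of a variable the student introduced:

```python
import ast

# Hypothetical example of a student's submitted program.
STUDENT_CODE = """
def largest(numbers):
    best = numbers[0]
    for n in numbers:
        if n > best:
            best = n
    return best
"""

def generate_qlc(source):
    """Sketch of a QLC generator: find the first variable the student
    assigns and produce a personalized question about its role."""
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.Assign) and isinstance(node.targets[0], ast.Name):
            name = node.targets[0].id
            return (f"On line {node.lineno}, what is the role of the "
                    f"variable '{name}' in your program?")
    return None

print(generate_qlc(STUDENT_CODE))
# Asks about the variable 'best' introduced on line 3 of the student's code.
```

Because the question is derived from the student's own code, two students with different correct solutions would receive different questions, which is what distinguishes QLCs from fixed comprehension quizzes.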

Emerging artificial intelligence (AI) based programming tools seem to increase the amount of program code produced, while in most cases humans still interact with and evaluate the generated code. This may add to the importance of understanding and discussing programs with human and AI colleagues. We used state-of-the-art AI models to answer program-writing exercises for novices and then researched how the models answered QLCs targeting the generated programs. Although the AI models answered QLCs more correctly than the average novice, they too produced flawed reasoning at unpredictable times. Students could study such answers to foster critical use of AI.

Key words: programming education, introductory programming, automated assessment, unproductive success, program comprehension, fragile knowledge, metacognition

Thesis available for public display 10 days prior to the defence at: https://aaltodoc.aalto.fi/doc_public/eonly/riiputus/ 

Doctoral theses at the School of Science: https://aaltodoc.aalto.fi/handle/123456789/52 
